import lime
import numpy as np
import sklearn
import sklearn.feature_extraction.text
import sklearn.metrics
In the previous tutorial, we looked at LIME in the two-class case. In this tutorial, we will use the 20 newsgroups dataset again, but this time using all of the classes.
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
# making class names shorter
class_names = [x.split('.')[-1] if 'misc' not in x else '.'.join(x.split('.')[-2:]) for x in newsgroups_train.target_names]
print(','.join(class_names))
Again, let's use the tfidf vectorizer, commonly used for text.
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(lowercase=False)
train_vectors = vectorizer.fit_transform(newsgroups_train.data)
test_vectors = vectorizer.transform(newsgroups_test.data)
This time we will use Multinomial Naive Bayes for classification, so that we can make reference to the scikit-learn guide to the 20 newsgroups dataset.
from sklearn.naive_bayes import MultinomialNB
c = MultinomialNB(alpha=.01)
c.fit(train_vectors, newsgroups_train.target)
pred = c.predict(test_vectors)
sklearn.metrics.f1_score(newsgroups_test.target, pred, average='weighted')
We see that this classifier achieves a very high F1 score. The scikit-learn guide to 20 newsgroups shows, by inspecting the features with the highest coefficients in the model overall, that Multinomial Naive Bayes overfits this dataset by learning irrelevant things, such as headers. We now use LIME to explain individual predictions instead.
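As a side note, here is a minimal sketch of how one might reproduce that inspection, looking at the ten highest-weighted features per class via MultinomialNB's feature_log_prob_ attribute (the guide uses a similar helper; note that get_feature_names was renamed to get_feature_names_out in newer scikit-learn versions):
feature_names = np.asarray(vectorizer.get_feature_names())
for i, label in enumerate(class_names):
    # the ten features with the highest per-class log probability
    top10 = np.argsort(c.feature_log_prob_[i])[-10:]
    print('%s: %s' % (label, ' '.join(feature_names[top10])))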
from lime import lime_text
explainer = lime_text.LimeTextExplainer(vocabulary=vectorizer.vocabulary_, class_names=class_names)
Previously, we used the default labels parameter when generating explanations, which works well in the binary case.
In the multiclass case, we have to determine which labels we want explanations for, via the labels parameter.
Below, we generate explanations for labels 1, 2, 3, 5, and 14.
idx = 1340
exp = explainer.explain_instance(test_vectors[idx], c.predict_proba, num_features=6, labels=[1,2,3,5,14])
print('Document id: %d' % idx)
print('Predicted class =', class_names[c.predict(test_vectors[idx])[0]])
print('True class: %s' % class_names[newsgroups_test.target[idx]])
Now, we can see the explanations for different labels. Notice that the positive and negative signs are with respect to a particular label: words that are negative towards class 1 may be positive towards class 14.
print('Explanation for class %s' % class_names[1])
print('\n'.join(map(str, exp.as_list(label=1))))
print()
print('Explanation for class %s' % class_names[14])
print('\n'.join(map(str, exp.as_list(label=14))))
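To make the sign flip concrete, here is a small sketch comparing the weight of any word that appears in both explanation lists (for a given document the two lists may not overlap at all):
w1 = dict(exp.as_list(label=1))
w14 = dict(exp.as_list(label=14))
for word in sorted(set(w1) & set(w14)):
    # the same word can push towards one class and against another
    print('%s: %.3f (%s) vs %.3f (%s)' % (word, w1[word], class_names[1], w14[word], class_names[14]))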
Another alternative is to ask LIME to generate explanations for the top $K$ classes. This is shown below with $K=5$.
To see which labels have explanations, use the available_labels method.
exp = explainer.explain_instance(test_vectors[idx], c.predict_proba, num_features=6, top_labels=5)
print(exp.available_labels())
Now let's see the explanation for the top class, along with the associated text.
from IPython.core.display import display, HTML
print('Explaining class atheism')
display(HTML(exp.as_html(text=newsgroups_test.data[idx], label=0, include=['predict_proba', 'pos', 'neg', 'local'])))
We notice that the classifier is using reasonable words (such as 'Theism' and 'Semitic'), as well as unreasonable ones ('Rice', 'owlnet').
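If you are working outside a notebook, the same explanation can also be read as plain (word, weight) pairs:
print('\n'.join(map(str, exp.as_list(label=0))))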
Let's see the explanations for 'atheism', 'christian', and 'mideast', without the associated text.
print('Explaining class atheism')
display(HTML(exp.as_html(label=0)))
print()
print('Explaining class christian')
display(HTML(exp.as_html(label=15)))
print()
print('Explaining class mideast')
display(HTML(exp.as_html(label=17)))
We notice that looking at the explanations for different classes can lead to different insights.
Finally, we follow the scikit-learn guide's suggestion of removing headers, footers, and quotes, and explain the same example with the new data.
newsgroups_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'))
newsgroups_test = fetch_20newsgroups(subset='test', remove=('headers', 'footers', 'quotes'))
train_vectors = vectorizer.fit_transform(newsgroups_train.data)
test_vectors = vectorizer.transform(newsgroups_test.data)
c = MultinomialNB(alpha=.01)
c.fit(train_vectors, newsgroups_train.target)
explainer = lime_text.LimeTextExplainer(vocabulary=vectorizer.vocabulary_, class_names=class_names)
exp = explainer.explain_instance(test_vectors[idx], c.predict_proba, num_features=6, top_labels=5)
print(exp.available_labels())
Notice how different the explanations are for the classifier trained without headers, footers, and quotes. The prediction changes, but so do the reasons.
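As a quick sanity check, we can confirm that the prediction itself changed for this document:
print('New predicted class:', class_names[c.predict(test_vectors[idx])[0]])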
print('Explaining class atheism')
display(HTML(exp.as_html(label=0)))
print()
print('Explaining class christian')
display(HTML(exp.as_html(label=15)))
print()
print('Explaining class mideast')
display(HTML(exp.as_html(label=17)))
Let's see the explanation with the text for the top class (christian):
print('Explaining class christian')
display(HTML(exp.as_html(text=newsgroups_test.data[idx], label=15, include=['predict_proba', 'pos', 'neg', 'local'])))
Notice how short the text became after removing all of that information. One begins to wonder if this version of the dataset is still useful, or if it would be better to find another dataset altogether.
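As a rough illustration, here is a sketch that refetches the unstripped test split just to measure how much of this document was removed:
full_text = fetch_20newsgroups(subset='test').data[idx]
print('Characters before removal: %d, after: %d' % (len(full_text), len(newsgroups_test.data[idx])))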
Anyway, I hope this illustrated how to use LIME to explain arbitrary classifiers in the multiclass case!